UMBC High Performance Computing Facility : Compiling C Programs on HPC
This page last changed on Jan 19, 2009 by straha1.
Hello World

There are two compilers available on HPC: the GNU Compiler Collection (GCC) and the Portland Group, Inc. (PGI) compiler. We will use the PGI compiler for this tutorial. If you wish to learn about the differences between the two, see this page. Below is a sample C Hello World program and instructions on how to compile it. For other languages, see these pages: C++ Hello World Program, Fortran 77 Hello World Program and Fortran 90 Hello World Program.

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("Greetings, Earthling mortals!\n");
        return 0;
    }

To compile this with the GCC C compiler and create the executable helloworld-c-gcc, type:

    gcc helloworld.c -o helloworld-c-gcc

To compile with the PGI C compiler and create helloworld-c-pgi, type:

    pgcc helloworld.c -o helloworld-c-pgi

Run the program with either ./helloworld-c-gcc (for GCC) or ./helloworld-c-pgi (for PGI) to produce the message:

    Greetings, Earthling mortals!

Parallel MPI Hello World

Choose a Compiler+MPI Combination

When you compile MPI programs, the compiler needs a number of extra command-line options to tell it where to find the MPI headers and which libraries to link against. Fortunately, you do not have to supply these yourself: each MPI implementation provides wrapper scripts that call the compiler for you. These scripts are mpicc (for C), mpicxx (for C++), mpif90 (for Fortran 90) and mpif77 (for Fortran 77). In addition, to successfully compile or run any MPI program, your PATH, LD_LIBRARY_PATH and other environment variables must be set correctly so that your shell can find the wrapper scripts and the MPI libraries. The HPC cluster has a program called Switcher that does this for you. Thus the first thing you have to do is have Switcher select the compiler+MPI combination you want to use.
To see the available compiler+MPI options, run:

    switcher mpi --list

which should produce:

    pgi-mvapich-1.1.0
    pgi-openmpi-1.2.8
    gcc-mvapich2-1.2p1
    pgi-mvapich2-1.2p1
    gcc-openmpi-1.2.8
    gcc-mvapich-1.1.0

Note that each MPI implementation (MVAPICH, MVAPICH2 and OpenMPI) has two different switcher mpi settings, one for GCC and one for PGI. This is because the MPI libraries you use must have been built with the same compiler as your code. That means you have to use Switcher to pick one of those six combinations before compiling and running your program. For the purposes of this tutorial, we will use pgi-openmpi-1.2.8; the instructions are also correct for gcc-openmpi-1.2.8. We are using OpenMPI for this tutorial since it is the easiest MPI implementation to use. The other two implementations are faster, so you should check out our MVAPICH1 and MVAPICH2 pages once you have worked through this tutorial.

To switch to the pgi-openmpi-1.2.8 compiler+MPI combination, type:

    switcher mpi = pgi-openmpi-1.2.8

If you wish to use the GCC version of OpenMPI instead, type:

    switcher mpi = gcc-openmpi-1.2.8

Make sure you include the spaces before and after the equal sign. Switcher may ask whether you are sure you want to replace your old MPI setting; if it does, type y and hit enter. After you do that, you must log out and log back in for the setting to take effect. If you don't, a variety of strange things may happen. Once you have logged back in, you can type:

    switcher mpi --show

to see which MPI implementation you are using. You should see:

    user:default=pgi-openmpi-1.2.8
    user:exists=1

The line user:default=pgi-openmpi-1.2.8 tells you which compiler+MPI combination you are using, while user:exists=1 merely means that you have chosen one. For your own programs, you may wish to use a different compiler or MPI implementation. If so, you can find details about the differences between the options on this page.
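After logging back in, you can also verify that your shell now finds the OpenMPI wrapper script. A quick sketch (note that --showme is an OpenMPI-specific flag; the MVAPICH wrappers use -show instead):

```shell
# Confirm that the mpicc wrapper script is on your PATH:
which mpicc

# Ask OpenMPI's wrapper to print the underlying compiler command
# (pgcc or gcc plus the include and link options) without
# actually compiling anything:
mpicc --showme
```

If `which mpicc` prints nothing, the Switcher setting has not taken effect yet and you most likely still need to log out and back in.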
Parallel Hello World in C

For other languages, see these pages: C++ Parallel Hello World, Fortran 77 Parallel Hello World and Fortran 90 Parallel Hello World. Here is a program that has every process print its rank, its processor name and the total number of processes. This program will compile, link and run with any of the six compiler+MPI options. We'll use pgi-openmpi-1.2.8, which you just set up in the previous section.

    /* Include the MPI library definitions: */
    #include <mpi.h>

    #include <string.h>
    #include <stdio.h>
    #include <stdlib.h> /* for system() */

    int main(int argc, char *argv[])
    {
        int id, np;
        char name[MPI_MAX_PROCESSOR_NAME];
        int namelen;

        /* Initialize the MPI library: */
        MPI_Init(&argc, &argv);

        /* Get the number of processors this job is using: */
        MPI_Comm_size(MPI_COMM_WORLD, &np);

        /* Get the rank of this process. (Each process has a
           unique rank.) */
        MPI_Comm_rank(MPI_COMM_WORLD, &id);

        /* Get the name of this processor (usually the hostname). We call
           memset to ensure the string is null-terminated. Not all MPI
           implementations null-terminate the processor name, since the MPI
           standard specifies that the name is *not* supposed to be returned
           null-terminated. */
        memset(name, 0, MPI_MAX_PROCESSOR_NAME);
        MPI_Get_processor_name(name, &namelen);
        memset(name + namelen, 0, MPI_MAX_PROCESSOR_NAME - namelen);

        printf("hello_parallel.c: Number of tasks=%d My rank=%d My name=\"%s\".\n",
               np, id, name);
        system("/bin/hostname");

        /* Tell the MPI library to release all resources it is using: */
        MPI_Finalize();
        return 0;
    }

If you put this program in a file hello_parallel.c, you can compile it using the commands:

    mpicc hello_parallel.c -c -o hello_parallel.o
    mpicc hello_parallel.o -o hello_parallel

Running mpicc tells the wrapper script to call pgcc with the appropriate options to compile and link your program against the OpenMPI libraries. The first command compiles your code and creates a non-executable file hello_parallel.o that contains your program re-expressed in machine code.
The second command links hello_parallel.o with the OpenMPI libraries to create an executable called hello_parallel. You can do all of that in a single command instead by typing:

    mpicc hello_parallel.c -o hello_parallel

These exact same commands will work with any of the other compiler+MPI combinations, as long as you have used Switcher to choose the appropriate combination. The mpicc wrappers of the other MPI implementations work the same way as pgi-openmpi-1.2.8's, except that they call gcc or pgcc with the correct options for MVAPICH, MVAPICH2 or OpenMPI. To learn how to run this program, continue on to this page. If you intend to create more complex programs that use multiple source files (for example, one source file with your main program and three more files with subroutines needed by the main program), you will need to learn how to link programs. To learn about linking MPI and non-MPI programs, go to this page.
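As a sketch of that multiple-source-file case (the file names main.c, sub1.c, sub2.c, sub3.c and myprogram are hypothetical, chosen only for illustration), each source file is compiled to an object file and the objects are then linked in one final step:

```shell
# Compile each source file separately into an object file (-c means
# "compile only, do not link"):
mpicc -c main.c -o main.o
mpicc -c sub1.c -o sub1.o
mpicc -c sub2.c -o sub2.o
mpicc -c sub3.c -o sub3.o

# Link all of the object files against the MPI libraries to produce
# the executable:
mpicc main.o sub1.o sub2.o sub3.o -o myprogram
```

The advantage of this two-step arrangement is that after editing one source file you only need to recompile that file and relink, rather than recompiling the whole program.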
Document generated by Confluence on Mar 31, 2011 15:37